In this paper, we propose a novel 3D graph convolution based pipeline for category-level 6D pose and size estimation from monocular RGB-D images. The proposed method leverages an efficient 3D data augmentation and a novel vector-based decoupled rotation representation. Specifically, we first design an orientation-aware autoencoder with 3D graph convolution for latent feature learning. Thanks to the shift- and scale-invariance properties of the 3D graph convolution, the learned latent feature is insensitive to point shift and object size. Then, to efficiently decode rotation information from the latent feature, we design a novel flexible vector-based decomposable rotation representation that employs two decoders to complementarily access the rotation information. The proposed rotation representation has two major advantages: 1) its decoupled nature makes rotation estimation easier; 2) the flexible length and rotation angle of the vectors allow us to find a vector representation better suited to a specific pose estimation task. Finally, we propose a 3D deformation mechanism to increase the generalization ability of the pipeline. Extensive experiments show that the proposed pipeline achieves state-of-the-art performance on category-level tasks. Further, the experiments demonstrate that the proposed rotation representation is better suited to pose estimation than other rotation representations.
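Below is a minimal sketch of how such a decoupled, vector-based representation could be decoded into a rotation matrix: two predicted axis vectors, which need not be unit-length or exactly orthogonal, are orthonormalized into a right-handed frame. The Gram-Schmidt recovery step and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rotation_from_two_vectors(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Build a rotation matrix from two (possibly noisy) axis predictions,
    e.g. the outputs of the two complementary decoders."""
    r1 = a / np.linalg.norm(a)            # first axis: normalize
    b_orth = b - np.dot(b, r1) * r1       # remove the component along r1
    r2 = b_orth / np.linalg.norm(b_orth)  # second axis: orthonormalize
    r3 = np.cross(r1, r2)                 # third axis: right-handed frame
    return np.stack([r1, r2, r3], axis=1) # columns are the three axes

# Example: two noisy, non-orthogonal predictions still yield a valid rotation.
R = rotation_from_two_vectors(np.array([0.9, 0.1, 0.0]),
                              np.array([0.1, 1.1, 0.2]))
assert np.allclose(R @ R.T, np.eye(3), atol=1e-6)
```

Because the two vectors are estimated independently, an error in one axis does not directly corrupt the other, which is the intuition behind the decoupled design.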
Image quality assessment (IQA) is a natural and often straightforward task for humans, yet automating it effectively remains highly challenging. Recent metrics from the deep learning community commonly compare image pairs during training to improve upon traditional metrics such as PSNR or SSIM. However, current comparisons ignore the fact that image content affects quality assessment, since comparisons only occur between images of similar content. This restricts the diversity and number of image pairs that the model is exposed to during training. In this paper, we strive to enrich these comparisons with content diversity. Firstly, we relax the comparison constraints and compare pairs of images with differing content. This increases the variety of available comparisons. Secondly, we introduce listwise comparisons to provide a holistic view to the model. By including differentiable regularizers derived from correlation coefficients, models can better adjust predicted scores relative to one another. Evaluation on multiple benchmarks, covering a wide range of distortions and image content, shows the effectiveness of our learning scheme for training image quality assessment models.
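As a concrete illustration, a listwise regularizer derived from the Pearson correlation coefficient can be written in a few lines. The sketch below assumes PyTorch and a simple 1 - PLCC form, which may differ from the exact regularizers used in the paper.

```python
import torch

def plcc_loss(pred: torch.Tensor, mos: torch.Tensor) -> torch.Tensor:
    """1 - Pearson correlation between predicted scores and ground-truth
    mean opinion scores (MOS) over a list of images; fully differentiable,
    so it can be minimized alongside a pointwise or pairwise objective."""
    p = pred - pred.mean()
    m = mos - mos.mean()
    corr = (p * m).sum() / (p.norm() * m.norm() + 1e-8)
    return 1.0 - corr

# Usage: a list of predicted scores for images with differing content.
pred = torch.randn(16, requires_grad=True)
mos = torch.rand(16)
loss = plcc_loss(pred, mos)
loss.backward()
```

Because the correlation is computed over the whole list, the gradient pushes scores into the right relative order rather than toward absolute values, which is the holistic view the listwise comparison provides.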
For visual manipulation tasks, we aim to represent image content with semantically meaningful features. However, learning implicit representations from images often lacks interpretability, especially when attributes are entangled. We focus on the challenging task of extracting disentangled 3D attributes only from 2D image data. Specifically, we focus on human appearance and learn implicit pose, shape, and garment representations of dressed humans from RGB images. Our method learns an embedding of disentangled latent representations for these three image attributes and enables meaningful re-assembly of features and attribute control through a 2D-to-3D encoder-decoder structure. The 3D model is inferred solely from the feature maps in the learned embedding space. To the best of our knowledge, our method is the first to tackle cross-domain disentanglement for this highly under-constrained problem. We demonstrate qualitatively and quantitatively the framework's ability to transfer pose, shape, and garments in 3D reconstruction on virtual data, and show how an implicit shape loss benefits the model's ability to recover fine-grained reconstruction details.
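A minimal sketch of the re-assembly idea follows: a shared backbone with one head per attribute yields separable latents, so the pose code of one image can be combined with the shape and garment codes of another. The module names, latent sizes, and architecture are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    def __init__(self, feat_dim=256, z_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, feat_dim, 4, 4), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # One head per attribute keeps the latents separable by construction.
        self.pose_head = nn.Linear(feat_dim, z_dim)
        self.shape_head = nn.Linear(feat_dim, z_dim)
        self.garment_head = nn.Linear(feat_dim, z_dim)

    def forward(self, img):
        f = self.backbone(img)
        return self.pose_head(f), self.shape_head(f), self.garment_head(f)

enc = DisentangledEncoder()
img_a, img_b = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
pose_a, _, _ = enc(img_a)
_, shape_b, garment_b = enc(img_b)
# A 3D decoder (omitted) would consume this recombined latent to transfer
# the pose of image A onto the body shape and clothing of image B.
z = torch.cat([pose_a, shape_b, garment_b], dim=-1)
```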
Despite recent efforts on accurate 3D annotations in hand and object datasets, a gap remains in 3D hand and object reconstruction. Existing works leverage contact maps to refine inaccurate hand-object pose estimations and generate grasps given object models. However, they require explicit 3D supervision, which is seldom available, and are therefore limited to constrained settings, e.g., a thermal camera observing the residual heat left on manipulated objects. In this paper, we propose a novel semi-supervised framework that allows us to learn contact from monocular images. Specifically, we leverage visual and geometric consistency constraints in large-scale datasets to generate pseudo-labels for semi-supervised learning, and propose an efficient graph-based network to infer contact. Our semi-supervised learning framework achieves a favourable improvement over existing supervised learning methods trained on data with limited annotations. Notably, our proposed model achieves superior results with less than half the network parameters and memory access cost of the commonly used PointNet-based approach. We show the benefits of using a contact map that regularizes hand-object interactions to produce more accurate reconstructions. We further demonstrate that training with pseudo-labels can extend contact map estimation to out-of-domain objects and generalize better across multiple datasets.
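The pseudo-labelling step could look roughly like the sketch below: the current model predicts per-point contact probabilities on unlabeled hand-object point clouds, and only confident predictions are kept as supervision for the next pass. The stub network and confidence filter are illustrative assumptions; the paper's visual and geometric consistency constraints are omitted for brevity.

```python
import torch
import torch.nn as nn

# Stub contact network: maps per-point 3D coordinates to a contact logit.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))

def pseudo_label_step(model, points, threshold=0.9):
    """Predict per-point contact probabilities on unlabeled point clouds
    and keep only confident predictions as pseudo-labels."""
    model.eval()
    with torch.no_grad():
        prob = torch.sigmoid(model(points)).squeeze(-1)   # (B, P) contact prob.
    confident = (prob > threshold) | (prob < 1 - threshold)
    pseudo = (prob > 0.5).float()
    return pseudo, confident   # supervise only where `confident` is True

pseudo, mask = pseudo_label_step(model, torch.randn(8, 1024, 3))
```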
We propose a novel optimization framework that crops a given image based on a user description and aesthetics. Unlike existing image cropping methods, where a deep network is typically trained to regress cropping parameters or cropping actions, we propose to directly optimize the cropping parameters by re-purposing networks pre-trained on image captioning and aesthetics tasks, without any fine-tuning, thereby avoiding training a separate network. Specifically, we search for the best crop parameters that minimize a combined loss of the initial objectives of these networks. To make the optimization tractable, we propose three strategies: (i) multi-scale bilinear sampling, (ii) annealing the scale of the crop region, thus effectively reducing the parameter space, and (iii) aggregation of multiple optimization results. Through various quantitative and qualitative evaluations, we show that our framework can produce crops that are consistent with the intended user description and aesthetically pleasing.
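The core trick, optimizing crop parameters directly through differentiable bilinear sampling, can be sketched as below. The affine parameterization and the placeholder objective stand in for the frozen captioning and aesthetics networks, and are assumptions rather than the exact implementation; scale annealing and multi-scale sampling are omitted for brevity.

```python
import torch
import torch.nn.functional as F

image = torch.rand(1, 3, 480, 640)              # input image
# Crop parameters: center (cx, cy) and scale s, in [-1, 1] grid coordinates.
params = torch.tensor([0.0, 0.0, 0.8], requires_grad=True)
opt = torch.optim.Adam([params], lr=0.05)

def crop(image, params, out_size=(224, 224)):
    cx, cy, s = params
    theta = torch.stack([
        torch.stack([s, torch.zeros(()), cx]),
        torch.stack([torch.zeros(()), s, cy]),
    ]).unsqueeze(0)                              # 1x2x3 affine matrix
    grid = F.affine_grid(theta, [1, 3, *out_size], align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)  # differentiable

for step in range(100):
    opt.zero_grad()
    cropped = crop(image, params)
    # In the paper's setting this would combine the frozen networks' losses:
    # loss = caption_loss(cropped, user_text) + aesthetics_loss(cropped)
    loss = -cropped.mean()                       # placeholder objective
    loss.backward()                              # gradients reach the crop params
    opt.step()
```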
Indirect Time-of-Flight (I-ToF) imaging is a widespread depth estimation modality for mobile devices due to its small size and affordable price. Previous works have mainly focused on quality improvement for I-ToF imaging, especially on curing the effect of Multi-Path Interference (MPI). These investigations are typically done in specifically constrained scenarios: at close distance, indoors, and under little ambient light. Surprisingly little work has investigated I-ToF quality improvement in real-life scenarios, where strong ambient light and far distances pose difficulties due to shot noise and signal sparsity induced by limited sensor power and light scattering. In this work, we propose a new learning-based end-to-end depth prediction network that takes noisy raw I-ToF signals as well as an RGB image, and fuses their latent representations based on a multi-step approach involving both implicit and explicit alignment, to predict a high-quality long-range depth map aligned with the RGB viewpoint. We test our approach on challenging real-world scenes and show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
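A two-branch latent fusion of this kind might be sketched as follows; the paper's implicit and explicit alignment steps are collapsed here into a single learned fusion block, and all channel counts and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoBranchDepthNet(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        # Assume 4 raw I-ToF correlation channels alongside the 3 RGB channels.
        self.tof_enc = nn.Sequential(nn.Conv2d(4, c, 3, 2, 1), nn.ReLU())
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, c, 3, 2, 1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * c, c, 1)       # fuse the two latents
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(c, 1, 3, 1, 1))

    def forward(self, tof_raw, rgb):
        z = torch.cat([self.tof_enc(tof_raw), self.rgb_enc(rgb)], dim=1)
        return self.dec(self.fuse(z))            # depth map in the RGB view

net = TwoBranchDepthNet()
depth = net(torch.randn(1, 4, 240, 320), torch.randn(1, 3, 240, 320))
```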
Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, and endeavours to extend this knowledge without targeting the original task result in catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task-incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern (1) a taxonomy and extensive overview of the state of the art; (2) a novel framework for continually determining the stability-plasticity trade-off of the continual learner; (3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks: Tiny ImageNet, the large-scale unbalanced iNaturalist dataset, and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which tasks are presented, and qualitatively compare methods in terms of required memory, computation time, and storage.
The current trend of applying transfer learning from CNNs trained on large datasets can be overkill when the target application is a custom, delimited problem with enough data to train a network from scratch. On the other hand, training custom, lighter CNNs requires expertise, in the from-scratch case, and/or high-end resources, as in the case of hardware-aware neural architecture search (HW NAS), limiting access to the technology for non-habitual NN developers. For this reason, we present ColabNAS, an affordable HW NAS technique for producing lightweight task-specific CNNs. Its novel derivative-free search strategy, inspired by Occam's razor, allows it to obtain state-of-the-art results on the Visual Wake Words dataset in just 4.5 GPU hours using free online GPU services such as Google Colaboratory and Kaggle Kernels.
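The search strategy might be approximated by the sketch below: start from the smallest candidate and grow capacity only while validation accuracy improves, in line with Occam's razor. The capacity schedule and trainer stub are assumptions, not the exact ColabNAS procedure.

```python
def derivative_free_search(build_and_train, capacities):
    """build_and_train(capacity) -> validation accuracy of a trained CNN.
    No gradients over the architecture space: just train, compare, stop."""
    best_cap, best_acc = None, 0.0
    for cap in sorted(capacities):          # smallest (cheapest) models first
        acc = build_and_train(cap)
        if acc <= best_acc:                 # no gain: stop, keep the smaller net
            break
        best_cap, best_acc = cap, acc
    return best_cap, best_acc

# Usage with a stubbed trainer; a real one would fit a CNN with `cap` filters
# and report accuracy on a held-out set.
best, acc = derivative_free_search(lambda cap: min(0.9, 0.5 + 0.01 * cap),
                                   capacities=[8, 16, 32, 64])
```

Stopping at the first non-improving step keeps both the search cost and the final model size low, which is what makes the approach viable on free GPU quotas.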
With the development of depth sensors in recent years, RGBD object tracking has received significant attention. Compared with traditional RGB object tracking, the additional depth modality can effectively help to disambiguate the target from background interference. However, some existing RGBD trackers use the two modalities separately, so particularly useful shared information between them is ignored. On the other hand, some methods attempt to fuse the two modalities by treating them equally, resulting in the loss of modality-specific features. To tackle these limitations, we propose a novel Dual-fused Modality-aware Tracker (termed DMTracker) which aims to learn informative and discriminative representations of the target objects for robust RGBD tracking. The first fusion module focuses on extracting the shared information between modalities based on cross-modal attention. The second aims at integrating the RGB-specific and depth-specific information to enhance the fused features. By fusing both the modality-shared and modality-specific information in a modality-aware scheme, our DMTracker can learn discriminative representations in complex tracking scenes. Experiments show that our proposed tracker achieves very promising results on challenging RGBD benchmarks. Code is available at \url{https://github.com/ShangGaoG/DMTracker}.
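The first fusion module's cross-modal attention could be sketched as follows, with RGB features querying depth features so that cues visible in both modalities receive high attention weights. The use of nn.MultiheadAttention and all dimensions are illustrative assumptions, not DMTracker's exact design.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb_feat, depth_feat):
        # Queries from RGB, keys/values from depth: modality-shared cues
        # dominate the attention weights. A symmetric pass with the roles
        # swapped could extract the complementary direction.
        shared, _ = self.attn(rgb_feat, depth_feat, depth_feat)
        return shared

fuse = CrossModalFusion()
rgb = torch.randn(2, 196, 256)     # (batch, tokens, channels)
depth = torch.randn(2, 196, 256)
shared = fuse(rgb, depth)          # (2, 196, 256) modality-shared features
```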
Multi-modal tracking has gained attention for being more accurate and robust in complex scenarios than conventional RGB-based tracking. Its key lies in how to fuse multi-modal data and reduce the gap between modalities. However, multi-modal tracking still severely suffers from data deficiency, which leads to insufficient learning of the fusion module. Instead of building such a fusion module, in this paper we provide a new perspective on multi-modal tracking by attaching importance to multi-modal visual prompts. We design a novel multi-modal prompt tracker (ProTrack), which can transfer multi-modal inputs to a single modality through a prompt paradigm. By best exploiting the tracking ability of pre-trained RGB trackers learned at scale, our ProTrack achieves high-performance multi-modal tracking by only changing the inputs, even without any extra training on multi-modal data. Extensive experiments on 5 benchmark datasets demonstrate the effectiveness of the proposed ProTrack.
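A minimal sketch of the prompt paradigm follows: the auxiliary modality is folded into the RGB input so a frozen, pre-trained RGB tracker can be reused unchanged. The linear blend and its weight are illustrative assumptions about how such a visual prompt could be formed.

```python
import torch

def prompt_input(rgb: torch.Tensor, aux: torch.Tensor, w: float = 0.3):
    """rgb: (B, 3, H, W); aux: (B, 1, H, W) depth/thermal map in [0, 1].
    Returns a 3-channel 'prompted' image the RGB tracker accepts as-is,
    so no fusion module has to be trained on scarce multi-modal data."""
    return (1 - w) * rgb + w * aux.expand(-1, 3, -1, -1)

rgb = torch.rand(1, 3, 256, 256)
depth = torch.rand(1, 1, 256, 256)
prompted = prompt_input(rgb, depth)   # feed to the unchanged RGB tracker
```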